BooookScore: A systematic exploration of book-length summarization in the era of LLMs
Yapei Chang, Kyle Lo, Tanya Goyal, Mohit Iyyer
Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than the oft-repetitive ones generated by LLaMA 2. Incremental updating yields a lower BooookScore but a higher level of detail than hierarchical merging, a trade-off sometimes preferred by human annotators. We release code and annotations after blind review to spur more principled research on book-length summarization.
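The metric itself is simple to state: a summary's BooookScore is the fraction of its sentences that contain none of the eight identified coherence error types. A minimal sketch of that computation, assuming sentence-level error annotations are already available (the annotation format and error-type names below are illustrative assumptions, not the authors' released schema):

```python
def booookscore(sentence_flags):
    """Compute a BooookScore-style metric.

    sentence_flags: one set per summary sentence, holding the coherence
    error types detected in that sentence (an empty set means the
    sentence is error-free). Returns the fraction of clean sentences.
    """
    if not sentence_flags:
        return 0.0
    clean = sum(1 for errors in sentence_flags if not errors)
    return clean / len(sentence_flags)

# Example: a 4-sentence summary where one sentence has an omission error
# and another has both an inconsistency and a repetition (hypothetical
# error-type labels standing in for the paper's eight categories).
flags = [set(), {"omission"}, set(), {"inconsistency", "repetition"}]
print(booookscore(flags))  # 0.5 -> half the sentences are error-free
```

In the paper the per-sentence error judgments come from an LLM prompted with the error taxonomy; the scoring step on top of those judgments is just this proportion.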
Image Classification Using SVMs: One-against-One Vs One-against-All
Anthony Gidudu, Gregg Hulley, Tshilidzi Marwala
Advancements in satellite sensor technology have enabled the acquisition of land cover information over large areas at various spatial, temporal, spectral, and radiometric resolutions. The process of relating pixels in a satellite image to known land cover classes is called image classification, and the algorithms used to effect the classification process are called image classifiers (Mather, 1987). The extraction of land cover information from satellite images using image classifiers has been the subject of intense interest and research in the remote sensing community (Foody and Mathur, 2004b). Traditional classifiers used in remote sensing studies include the maximum likelihood, minimum distance to means, and box classifiers. As technology has advanced, newer algorithms such as decision trees and artificial neural networks have entered the mainstream of image classification. Studies comparing these new techniques with the traditional ones have found that they yield improved classification accuracies (Peddle et al. 1994; Rogan et al. 2002; Li et al. 2003; Mahesh and Mather, 2003).
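The comparison in the title concerns how a binary classifier such as the SVM is extended to multiple land cover classes. A minimal sketch of the two strategies using scikit-learn, with synthetic data standing in for per-pixel spectral features and land-cover labels (the data, kernel, and parameters are illustrative assumptions, not the authors' setup):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic stand-in: 4 "land cover" classes, 6 spectral-band features
# per pixel (hypothetical data, not the paper's imagery).
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One-against-one: trains k(k-1)/2 binary SVMs, one per pair of classes,
# and classifies by majority vote among the pairwise decisions.
ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X_train, y_train)

# One-against-all: trains k binary SVMs, each separating one class from
# all the others, and picks the class with the strongest decision value.
ova = OneVsRestClassifier(SVC(kernel="rbf")).fit(X_train, y_train)

print("one-against-one accuracy:", ovo.score(X_test, y_test))
print("one-against-all accuracy:", ova.score(X_test, y_test))
```

Note that scikit-learn's SVC already uses a one-against-one scheme internally for multi-class problems; the explicit wrappers above simply make the two strategies directly comparable.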